## [1] 1.857912e-08
There are clear run-failure thresholds in the parameters rootd_ft_io and lai_max_io, and fairly strong visual indications that a_wl_io and bio_hum_cn matter.
For now, we'll stick to the things that we know should work, using mean NPP as an example.
Andy would like to see timeseries of cVeg, cSoil, nbp and npp, in GtC and GtC/yr.
This ensemble appears less clear-cut than before: no single output cleanly separates "failed" (or nearly failed) runs from "not failed" ones. That said, an f0_io value above a threshold seems to kill the carbon cycle, as before.
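As a sketch of how such a failure threshold can be applied, assuming an input matrix `X` and a vector of ensemble mean NPP `npp_mean` (both hypothetical names, with toy stand-in data here rather than the JULES ensemble):

```r
# Sketch: flag ensemble members whose carbon cycle has collapsed,
# using a (hypothetical) threshold on mean NPP. Object and column
# names are illustrative, not from the analysis code.
set.seed(1)
npp_mean <- runif(300, 0, 80)              # stand-in for ensemble mean NPP (GtC/yr)
npp_mean[sample(300, 20)] <- 0             # inject some "failed" runs
X <- matrix(runif(300 * 2), ncol = 2,
            dimnames = list(NULL, c("f0_io", "rootd_ft_io")))

failed <- npp_mean < 1e-3                  # a "killed" carbon cycle

# Compare the input distributions of failed vs successful runs
summary(X[failed, "f0_io"])
summary(X[!failed, "f0_io"])
```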
##
## optimisation start
## ------------------
## * estimation method : MLE
## * optimisation method : BFGS
## * analytical gradient : used
## * trend model : ~alpha_io + a_wl_io + bio_hum_cn + b_wl_io + dcatch_dlai_io +
## dqcrit_io + dz0v_dh_io + f0_io + fd_io + g_area_io + g_root_io +
## g_wood_io + gs_nvg_io + hw_sw_io + kaps_roth + knl_io + lai_max_io +
## lai_min_io + lma_io + n_inorg_turnover + nmass_io + nr_io +
## retran_l_io + retran_r_io + r_grow_io + rootd_ft_io + sigl_io +
## sorp + tleaf_of_io + tlow_io + tupp_io + l_vg_soil
## * covariance model :
## - type : matern5_2
## - nugget : NO
## - parameters lower bounds : 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10
## - parameters upper bounds : 2 2 1.999244 2 1.995574 2 1.998154 1.790352 2 2 2 1.998084 2 2 2 2 2 1.999201 2 1.996693 1.998361 2 2 1.995738 2 1.993455 1.995643 1.998832 2 2 2 2
## - best initial criterion value(s) : -6187.415
##
## N = 32, M = 5 machine precision = 2.22045e-16
## At X0, 0 variables are exactly at the bounds
## At iterate 0 f= 6187.4 |proj g|= 1.7325
## At iterate 1 f = 6089.3 |proj g|= 1.5612
## At iterate 2 f = 6083.7 |proj g|= 1.9398
## At iterate 3 f = 6073.5 |proj g|= 1.8895
## At iterate 4 f = 6069.8 |proj g|= 1.3248
## At iterate 5 f = 6063.8 |proj g|= 1.3904
## At iterate 6 f = 6060.3 |proj g|= 1.8952
## At iterate 7 f = 6059.7 |proj g|= 1.8906
## At iterate 8 f = 6059.4 |proj g|= 1.3545
## At iterate 9 f = 6057.6 |proj g|= 1.3364
## At iterate 10 f = 6056.6 |proj g|= 1.3259
## At iterate 11 f = 6056 |proj g|= 1.3168
## At iterate 12 f = 6055.7 |proj g|= 1.8942
## At iterate 13 f = 6055.6 |proj g|= 1.8925
## At iterate 14 f = 6055.4 |proj g|= 1.3007
## At iterate 15 f = 6055.3 |proj g|= 1.2973
## At iterate 16 f = 6055.1 |proj g|= 1.7414
## At iterate 17 f = 6054.8 |proj g|= 1.8918
## At iterate 18 f = 6054.4 |proj g|= 1.2484
## At iterate 19 f = 6054.1 |proj g|= 1.2204
## At iterate 20 f = 6053.9 |proj g|= 1.891
## At iterate 21 f = 6053.8 |proj g|= 1.8916
## At iterate 22 f = 6053.4 |proj g|= 1.8904
## At iterate 23 f = 6052.9 |proj g|= 1.1197
## At iterate 24 f = 6052.5 |proj g|= 0.90512
## At iterate 25 f = 6052.4 |proj g|= 1.888
## At iterate 26 f = 6052.4 |proj g|= 1.8871
## At iterate 27 f = 6052.3 |proj g|= 0.81344
## At iterate 28 f = 6052.3 |proj g|= 0.82086
## At iterate 29 f = 6052.2 |proj g|= 0.81688
## At iterate 30 f = 6052.2 |proj g|= 0.80176
## At iterate 31 f = 6052.1 |proj g|= 1.8877
## At iterate 32 f = 6052.1 |proj g|= 1.8885
## At iterate 33 f = 6052.1 |proj g|= 1.8892
## At iterate 34 f = 6052 |proj g|= 1.887
## At iterate 35 f = 6052 |proj g|= 0.8794
## At iterate 36 f = 6052 |proj g|= 0.91678
## At iterate 37 f = 6052 |proj g|= 0.88887
## At iterate 38 f = 6051.9 |proj g|= 0.90052
## At iterate 39 f = 6051.9 |proj g|= 1.0186
## At iterate 40 f = 6051.9 |proj g|= 1.8873
## At iterate 41 f = 6051.9 |proj g|= 1.8887
## At iterate 42 f = 6051.9 |proj g|= 1.8901
## At iterate 43 f = 6051.8 |proj g|= 1.891
## At iterate 44 f = 6051.8 |proj g|= 0.97016
## At iterate 45 f = 6051.8 |proj g|= 1.2852
## At iterate 46 f = 6051.7 |proj g|= 1.8861
## At iterate 47 f = 6051.7 |proj g|= 0.36741
## At iterate 48 f = 6051.7 |proj g|= 0.36999
## At iterate 49 f = 6051.7 |proj g|= 0.37269
## At iterate 50 f = 6051.7 |proj g|= 0.36124
## At iterate 51 f = 6051.7 |proj g|= 0.3168
## At iterate 52 f = 6051.7 |proj g|= 0.32754
## At iterate 53 f = 6051.7 |proj g|= 0.27674
## At iterate 54 f = 6051.7 |proj g|= 0.35866
## At iterate 55 f = 6051.7 |proj g|= 0.41074
## At iterate 56 f = 6051.7 |proj g|= 0.12815
## At iterate 57 f = 6051.7 |proj g|= 0.1118
## At iterate 58 f = 6051.7 |proj g|= 0.079767
## At iterate 59 f = 6051.7 |proj g|= 0.11687
## At iterate 60 f = 6051.7 |proj g|= 0.11674
## At iterate 61 f = 6051.7 |proj g|= 0.048776
## At iterate 62 f = 6051.7 |proj g|= 0.049282
## At iterate 63 f = 6051.7 |proj g|= 0.020382
##
## iterations 63
## function evaluations 79
## segments explored during Cauchy searches 86
## BFGS updates skipped 0
## active bounds at final generalized Cauchy point 25
## norm of the final projected gradient 0.0203815
## final function value 6051.72
##
## F = 6051.72
## final value 6051.720359
## converged
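The optimisation trace above is the standard output of `km()` from the DiceKriging R package when fitting a Gaussian process by maximum likelihood with a matern5_2 covariance and BFGS. A minimal, self-contained sketch (toy design and response, not the JULES ensemble) that produces output of this form:

```r
library(DiceKriging)

# Toy two-parameter design and response standing in for the ensemble
set.seed(42)
X <- matrix(runif(60 * 2), ncol = 2,
            dimnames = list(NULL, c("f0_io", "lai_max_io")))
y <- sin(5 * X[, 1]) + X[, 2]^2 + rnorm(60, sd = 0.01)

# Fit a GP emulator; MLE and covtype = "matern5_2" are the km() defaults,
# and the linear trend on the inputs mirrors the trend model shown above
fit <- km(formula = ~., design = data.frame(X), response = y,
          covtype = "matern5_2", optim.method = "BFGS")
print(fit)
```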
The emulators appear to be at least capturing the broad response for all of the output variables.
## [1] 10.082
Monte Carlo filtering (MCF) gives another form of sensitivity metric. MCF splits the input sample into two parts (for example, those that do and do not meet some threshold) and then examines the differences between the resulting parameter distributions. A useful guide to MCF can be found in [Pianosi et al. (2016)](https://www.sciencedirect.com/science/article/pii/S1364815216300287).
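The splitting-and-comparing step can be sketched with a two-sample Kolmogorov-Smirnov test, which is what the warnings below come from (toy inputs and output; names are illustrative):

```r
# Sketch of Monte Carlo filtering: split a uniform input sample by a
# (hypothetical) output threshold, then compare the two marginal
# distributions of each input with a two-sample KS test.
set.seed(1)
X <- matrix(runif(1000 * 3), ncol = 3)
y <- X[, 1]^2 + 0.1 * X[, 2]          # toy output; input 3 is inactive
behavioural <- y > median(y)          # the threshold split

# A large KS statistic (small p-value) flags an influential input
for (i in 1:ncol(X)) {
  ks <- ks.test(X[behavioural, i], X[!behavioural, i])
  cat(sprintf("input %d: D = %.3f, p = %.3g\n", i, ks$statistic, ks$p.value))
}
```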
We compare the emulated input samples that are ruled out against those that are NROY (not ruled out yet) under the initial constraint.
## Warning in ks.test(X.ro[, i], X.nroy[, i]): p-value will be approximate in
## the presence of ties
## (warning repeated for each input parameter)
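The repeated warnings arise because the exact p-value in `ks.test()` assumes continuous distributions with no tied values; duplicated values in the samples force it to fall back on an approximation. This is harmless here, since we compare the KS statistics rather than the p-values, but the warnings can be avoided by breaking ties with a tiny jitter. A sketch, using stand-ins for the `X.ro` and `X.nroy` samples so the snippet is self-contained:

```r
# Breaking ties before the KS test avoids the approximate-p-value warning.
# X.ro and X.nroy stand in for the ruled-out and NROY input samples.
set.seed(1)
X.ro   <- round(runif(500), 2)   # rounding induces ties
X.nroy <- round(runif(500), 2)

ks.test(jitter(X.ro, amount = 1e-9), jitter(X.nroy, amount = 1e-9))
```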